7 research outputs found

    Demand-Response Based Energy Advisor for Household Energy Management

    Home energy management systems (HEMS) are set to play a key role in the future smart grid (SG). The HEMS concept enables residential customers to actively participate in demand response (DR) programs to control their energy usage and reduce peak demand, thereby contributing to improved grid performance and reliability. This paper proposes an energy management strategy for residential end-consumers. Within this framework, a demand response strategy is developed to reduce home energy consumption. The proposed algorithm seeks to minimise peak demand by scheduling household appliance operation and shifting controllable loads from peak hours, when electricity prices are high, to off-peak periods, when electricity prices are lower, without affecting the customer's preferences. The overall system is simulated in MATLAB/Simulink, and the results demonstrate the effectiveness of the proposed control strategy in managing daily household energy consumption. Peer reviewed
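The peak-to-off-peak load-shifting idea described above can be illustrated with a short greedy scheduler: each controllable appliance is moved to the cheapest hour inside its user-allowed operating window. The price signal, appliance names and windows below are illustrative assumptions, not the paper's MATLAB/Simulink model.

```python
# Greedy load-shifting sketch: run each controllable appliance at the
# cheapest hour inside its user-allowed window (all values illustrative).
prices = [10, 10, 10, 12, 15, 20, 30, 35, 35, 30, 25, 20,
          20, 20, 25, 30, 35, 40, 40, 35, 30, 20, 15, 10]  # p/kWh per hour

# (name, energy in kWh, earliest allowed start hour, latest allowed start hour)
appliances = [
    ("washing_machine", 1.2, 8, 22),
    ("dishwasher",      1.0, 18, 23),
    ("tumble_dryer",    2.5, 8, 22),
]

def schedule(appliances, prices):
    """Return {appliance: (start_hour, cost_in_pence)} per one-hour run."""
    plan = {}
    for name, energy, earliest, latest in appliances:
        best = min(range(earliest, latest + 1), key=lambda h: prices[h])
        plan[name] = (best, energy * prices[best])
    return plan

plan = schedule(appliances, prices)
for name, (hour, cost) in plan.items():
    print(f"{name}: start {hour}:00, cost {cost:.1f}p")
```

With these prices, all flexible loads migrate to the cheap late-evening and night hours, which is the qualitative behaviour the abstract reports.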

    Demand Response Strategy Based on Reinforcement Learning and Fuzzy Reasoning for Home Energy Management

    As energy demand continues to increase, demand response (DR) programs in the electricity distribution grid are gaining momentum, and their adoption is set to grow over the years ahead. Demand response schemes seek to incentivise consumers to use green energy and reduce their electricity usage during peak periods, which helps balance supply and demand on the grid and generates revenue by selling surplus energy back to the grid. This paper proposes an effective energy management system for residential demand response using Reinforcement Learning (RL) and Fuzzy Reasoning (FR). RL is a model-free control strategy that learns from interaction with its environment by performing actions and evaluating the results. The proposed algorithm accounts for human preference by directly integrating user feedback into its control logic, using fuzzy reasoning in the reward functions. Q-learning, an RL strategy based on a reward mechanism, is used to make optimal decisions to schedule the operation of smart home appliances, shifting controllable appliances from peak periods, when electricity prices are high, to off-peak hours, when electricity prices are lower, without affecting the customer's preferences. The proposed approach uses a single agent to control 14 household appliances, a reduced number of state-action pairs, and fuzzy logic in the reward functions to evaluate the action taken in a given state. The simulation results show that the proposed appliance scheduling approach can smooth the power consumption profile and minimise the electricity cost while considering the user's preferences, the user's feedback on each action taken, and their preference settings. A user interface for the Home Energy Management System (HEMS) is developed in MATLAB/Simulink to demonstrate the proposed DR scheme. The simulation tool includes features such as smart appliances, electricity pricing signals, smart meters, solar photovoltaic generation, battery energy storage, an electric vehicle and grid supply. Peer reviewed
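The fuzzy-reasoning reward described above can be sketched as a blend of triangular membership functions: one grades how cheap the current tariff is, the other how close the running hour is to the user's preferred time. The membership shapes and weights below are illustrative assumptions, not the paper's actual rule base.

```python
# Fuzzy-style reward sketch: triangular memberships grade "cheap tariff"
# and "preferred time", and the reward is their weighted blend.
def tri(x, a, b, c):
    """Triangular membership: rises from a, peaks at b, falls to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def reward(price, hour, preferred_hour):
    cheap = tri(price, 0.0, 0.05, 0.35)                  # £/kWh graded as cheap
    comfort = tri(hour, preferred_hour - 4, preferred_hour, preferred_hour + 4)
    return 0.5 * cheap + 0.5 * comfort                   # blended fuzzy reward

print(round(reward(0.10, 20, 20), 3))
```

A reward of this shape lets the Q-learning agent trade off cost against comfort continuously rather than through a hard constraint, which is the role the abstract assigns to fuzzy reasoning.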

    Demand Response Management and Control Strategies for Integrated Smart Electricity Networks

    Demand Response (DR) programs are being introduced by some electricity grid operators as resource options for curtailing and reducing electricity demand during certain time periods to balance supply and demand. DR is a class of demand-side management programs in which utilities offer incentives to end-users to reduce their power consumption during peak periods. DR is indeed a promising opportunity for consumers to control their energy usage in response to electricity tariffs or other incentives from their energy suppliers. Thus, successful execution of a DR program requires the design of efficient algorithms and strategies to be used in the utility grid to motivate end-users to actively engage in residential DR. This thesis studies DR management using machine learning techniques such as Reinforcement Learning (RL), Fuzzy Logic (FL) and Neural Networks (NN) to develop a Home Energy Management System (HEMS) for customers, construct an energy customer behaviour framework, investigate the integration of Electric Vehicles (EVs) into DR management at the home level and the provision of ancillary services to the utility grid such as Frequency Regulation (FR), and build effective pricing strategies for Peer-to-Peer (P2P) energy trading. In this thesis, we first propose a new and effective algorithm for a residential energy management system using the Q-learning method to minimise electricity bills and maximise user satisfaction. The proposed DR algorithm schedules household appliances considering dynamic electricity prices and different household power consumption patterns. Moreover, a human comfort-based control approach for HEMS has been developed to increase user satisfaction as much as possible while responding to DR schemes.
The simulation results presented in this chapter show that the proposed algorithm minimises energy consumption, reduces household electricity bills, and maximises user satisfaction. Secondly, with the increasing electrification of vehicles, emerging technologies such as Vehicle-to-Grid (V2G) and Vehicle-to-Home (V2H) have the potential to offer a broad range of benefits and services for more effective management of electricity demand. In this way, EVs become distributed energy storage resources and can conceivably, in conjunction with other electricity storage solutions, contribute to DR and provide additional capacity to the grid when needed. Therefore, we propose an effective DR approach for V2G and V2H energy management using Reinforcement Learning (RL) to make optimal decisions to charge or delay the charging of the EV battery pack and/or dispatch the stored electricity back to the grid without compromising the driving needs. Simulation studies demonstrate how the proposed DR strategy can effectively manage the charging/discharging schedule of the EV battery and how V2H and V2G can contribute to smoothing the household load profile, minimising electricity bills and maximising revenue. In addition, the potential benefits of EV batteries and V2G technology for providing grid frequency response services have also been investigated. We have designed an optimal real-time V2G control strategy for EVs to perform supplementary frequency regulation using Deep Deterministic Policy Gradient (DDPG). The main feature distinguishing the proposed approach from previous related works is that the scheduled charging power of an individual EV is optimally tracked and adjusted in real time to fulfil the charging demand of the EV's battery at plug-out time without resorting to forced charging, thereby maximising the frequency regulation capacity.
Finally, a Peer-to-Peer (P2P) model for energy transactions in a community microgrid has been proposed. The concept of P2P energy trading can promote the implementation of DR by giving consumers greater control over their energy usage, incentivising them to manage their consumption patterns in response to changes in energy supply and demand. It also stimulates the adoption of renewable energy sources. The proposed P2P energy-sharing mechanism for a residential microgrid with price-based DR is designed to engage individual customers in energy trading and ensures that not a single household is worse off. The proposed pricing mechanism is compared with three popular P2P energy-sharing models from the literature, namely the Supply and Demand Ratio (SDR), Mid-Market Rate (MMR) and Bill Sharing (BS), considering different types of peers equipped with solar Photovoltaic (PV) panels, EVs, and domestic energy storage systems. The proposed P2P framework has been applied to a community of 100 households, and the simulation results demonstrate fairness and substantial energy cost savings/revenue among peers. The P2P model has also been assessed under the physical constraints of the distribution network.
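As a rough illustration of one of the benchmark schemes named above, the Mid-Market Rate (MMR) model settles all matched peer-to-peer trades at the midpoint of the retail and export prices, with any residual imbalance traded with the grid. The prices and quantities below are assumptions for illustration only, not the thesis's data.

```python
# Mid-Market Rate (MMR) settlement sketch: matched P2P energy trades at the
# midpoint of retail and export prices; the surplus side settles with the grid.
RETAIL, EXPORT = 0.30, 0.05          # grid import / export prices, £/kWh
MID = (RETAIL + EXPORT) / 2          # P2P trading price, £/kWh

def mmr_settlement(supply, demand):
    """Return (total seller revenue, total buyer cost) in £ for one interval."""
    traded = min(supply, demand)                       # kWh matched peer-to-peer
    revenue = traded * MID + max(supply - demand, 0.0) * EXPORT
    cost = traded * MID + max(demand - supply, 0.0) * RETAIL
    return revenue, cost

rev, cost = mmr_settlement(supply=30.0, demand=50.0)   # community short 20 kWh
print(f"sellers earn £{rev:.2f}, buyers pay £{cost:.2f}")
```

Because the P2P price sits strictly between the export and retail tariffs, sellers earn more and buyers pay less than they would trading with the grid alone, which is the basic incentive the thesis's comparison builds on.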

    A Dynamic Peer-to-Peer Electricity Market Model for a Community Microgrid with Price-Based Demand Response

    © 2023 IEEE. This is the accepted manuscript version of an article which has been published in final form at https://doi.org/10.1109/TSG.2023.3246083. Peer-to-Peer (P2P) energy sharing enables prosumers within a community microgrid to directly trade their local energy resources, such as solar photovoltaic (PV) panels, small-scale wind turbines and electric vehicle battery storage, among each other based on an agreed cost-sharing mechanism. This paper addresses the energy cost minimisation problem associated with P2P energy sharing among smart homes connected in a residential community. The contribution of this paper is threefold. First, an effective Home Energy Management System (HEMS) is proposed for smart homes equipped with local generation such as rooftop solar panels, storage and appliances to achieve the demand response (DR) objective. Second, the paper proposes a P2P pricing mechanism based on the dynamic supply-demand ratio and the export-import retail price ratio. This P2P model motivates individual customers to participate in energy trading and ensures that not a single household is worse off. Finally, the performance of the proposed pricing mechanism is compared with three popular P2P sharing models in the literature, namely the Supply and Demand Ratio (SDR), Mid-Market Rate (MMR) and Bill Sharing (BS), considering different types of peers equipped with solar panels, an electric vehicle, and a domestic energy storage system. The proposed P2P framework has been applied to a community of 100 households, and the simulation results demonstrate fairness and substantial energy cost savings/revenue among peers. The P2P model has also been assessed under the physical constraints of the distribution network. Peer reviewed
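For context, the Supply and Demand Ratio (SDR) benchmark that the paper compares against can be sketched as follows: the internal selling price interpolates between the export and retail prices as a function of the community-wide supply-demand ratio. The formulation follows the commonly cited SDR model; the prices are illustrative assumptions, not the paper's data.

```python
# Supply-Demand Ratio (SDR) pricing sketch: the internal sell price moves
# between the retail and export prices as community supply/demand varies.
RETAIL = 0.30   # grid import price, £/kWh (assumed)
EXPORT = 0.05   # grid export price, £/kWh (assumed)

def sdr_prices(total_supply, total_demand):
    """Return (sell_price, buy_price) in £/kWh for one trading interval."""
    if total_demand == 0 or total_supply >= total_demand:
        return EXPORT, EXPORT        # surplus: everything settles at export price
    sdr = total_supply / total_demand            # 0 <= SDR < 1
    sell = (EXPORT * RETAIL) / ((RETAIL - EXPORT) * sdr + EXPORT)
    buy = sell * sdr + RETAIL * (1 - sdr)        # buyers mix P2P and grid energy
    return sell, buy

sell, buy = sdr_prices(total_supply=40.0, total_demand=100.0)
print(f"sell {sell:.3f}, buy {buy:.3f} £/kWh")
```

At SDR = 0 both prices collapse to the retail tariff and at SDR = 1 to the export tariff, so the rule smoothly rewards scarcity and penalises oversupply.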

    Optimal Vehicle-to-Grid Control for Supplementary Frequency Regulation Using Deep Reinforcement Learning

    © 2022 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND licence: https://doi.org/10.1016/j.epsr.2022.108949. The expanding Electric Vehicle (EV) market presents a new opportunity for electric vehicles to deliver a wide range of valuable grid services. Indeed, the emerging Vehicle-to-Grid (V2G) technology, with its bi-directional flow of power, gives the grid access to mobile energy storage for demand response, frequency regulation and balancing of the local distribution system. This reduces electricity costs at peak hours and can be profitable for customers, network operators and energy retailers. In this paper, an optimal V2G control strategy using Deep Reinforcement Learning (DRL) is proposed to simultaneously maximise the benefits of EV owners and aggregators while fulfilling the driving needs of EV owners. In the proposed DRL-based V2G control strategy, a Deep Deterministic Policy Gradient (DDPG) agent dynamically adjusts the V2G power scheduling to satisfy the driving demand of EV users while simultaneously performing frequency regulation tasks. The proposed V2G control scheme is tested on a two-area power system undergoing frequency deviations. The results show that the proposed V2G control achieves a greater reduction in frequency deviation and an improved Area Control Error (ACE), while satisfying the charging demands of EVs, compared to other strategies. Peer reviewed
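The paper's DDPG agent is beyond a short sketch, but the core constraint it learns to respect — offering frequency regulation around the scheduled charging power without jeopardising the plug-out energy demand — can be illustrated with a simple droop-style rule. This is a rule-based stand-in for the learned policy, and every parameter below is an assumption for illustration.

```python
# Rule-based stand-in for the DDPG policy (illustrative parameters): adjust
# the scheduled charging power with a droop term on frequency deviation,
# but never below the floor that keeps the plug-out energy target reachable.
P_MAX = 7.0     # charger power limit, kW (positive = charging)
DROOP = 40.0    # regulation gain, kW per Hz of frequency deviation
DT = 0.25       # control time step, hours

def v2g_power(p_sched, delta_f, energy_needed, hours_left):
    """Charging power for this step; delta_f < 0 means under-frequency."""
    p = p_sched + DROOP * delta_f                 # inject power on frequency dips
    # Even charging flat-out afterwards must still cover the remaining energy.
    floor = (energy_needed - P_MAX * (hours_left - DT)) / DT
    return max(max(-P_MAX, floor), min(P_MAX, p))

# Under-frequency event with plenty of time left: the EV discharges 5 kW.
print(v2g_power(p_sched=3.0, delta_f=-0.2, energy_needed=10.0, hours_left=4.0))
```

The feasibility floor is what corresponds to the paper's "no forced charging" feature: regulation capacity shrinks automatically as the plug-out deadline approaches, instead of being reclaimed by a late full-power charge.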

    User Comfort-Oriented Home Energy Management System Under Demand Response

    © 2023 IEEE. This is the accepted manuscript version of a conference paper which has been published in final form at https://doi.org/10.1109/GlobConET56651.2023.10150106. Home Energy Management Systems (HEMS) play a crucial role in empowering residential consumers to actively engage in Demand Response (DR) programs by controlling their consumption and reducing or shifting it outside periods of grid stress in response to financial rewards such as tax breaks or rebates. However, DR trials across the world have shown that most customers remain reluctant to enrol in these programs due to undesirable power disruptions and the strict guidelines imposed by utilities, resulting in inconvenience for users. In this paper, a human comfort-based control approach is proposed for HEMS to stimulate the active participation of residential customers in DR. The aim of the proposed algorithm is to minimise household load demand by scheduling the operation of household appliances and shifting controllable loads from periods when electricity prices are high to off-peak periods, when electricity prices are lower, without compromising the customer's preferences. The simulation results demonstrate the effectiveness of the proposed DR strategy in managing daily household energy consumption and improving occupants' comfort. Peer reviewed

    Electrical Vehicle Grid Integration for Demand Response in Distribution Networks Using Reinforcement Learning

    © 2021 The Authors. IET Electrical Systems in Transportation published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology. This is an open access article under the terms of the Creative Commons Attribution License, https://creativecommons.org/licenses/by/4.0/. Most utilities across the world already have demand response (DR) programs in place to incentivise consumers to reduce or shift their electricity consumption from peak periods to off-peak hours, usually in response to financial incentives. With the increasing electrification of vehicles, emerging technologies such as vehicle-to-grid (V2G) and vehicle-to-home (V2H) have the potential to offer a broad range of benefits and services for more effective management of electricity demand. In this way, electric vehicles (EVs) become distributed energy storage resources and can conceivably, in conjunction with other electricity storage solutions, contribute to DR and provide additional capacity to the grid when needed. This paper proposes an effective DR approach for V2G and V2H energy management using Reinforcement Learning (RL). Q-learning, an RL strategy based on a reward mechanism, is used to make optimal decisions to charge or delay the charging of the EV battery pack and/or dispatch the stored electricity back to the grid without compromising the driving needs. Simulations demonstrate how the proposed DR strategy can effectively manage the charging/discharging schedule of the EV battery and how V2H and V2G can contribute to smoothing the household load profile, minimising electricity bills and maximising revenue. Peer reviewed
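A toy version of the Q-learning charge/delay/discharge decision can be sketched on a deterministic two-period tariff, with the battery level and tariff period as the state. The tariff values, battery size and reward design are illustrative assumptions, not the paper's model, and infeasible actions simply behave like idling.

```python
import random
random.seed(1)

# Toy Q-learning for the charge/delay/discharge decision: states are
# (tariff period, battery level) on a deterministic alternating two-period
# tariff. Tariffs, battery size and rewards are illustrative assumptions.
ACTIONS = ("charge", "idle", "discharge")
PRICE = [0.10, 0.40]                     # £/kWh: period 0 off-peak, 1 peak

def step(period, soc, action):
    """One transition; infeasible actions behave like idle."""
    reward = 0.0
    if action == "charge" and soc < 3:
        soc, reward = soc + 1, -3 * PRICE[period]     # buy 3 kWh
    elif action == "discharge" and soc > 0:
        soc, reward = soc - 1, 3 * PRICE[period]      # sell 3 kWh to grid/home
    return (1 - period, soc), reward                  # tariff periods alternate

Q = {((p, s), a): 0.0 for p in (0, 1) for s in range(4) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.3

for _ in range(5000):
    state = (random.choice((0, 1)), random.randrange(4))   # exploring start
    for _ in range(10):
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda x: Q[(state, x)])
        nxt, r = step(state[0], state[1], a)
        target = r + gamma * max(Q[(nxt, x)] for x in ACTIONS)
        Q[(state, a)] += alpha * (target - Q[(state, a)])
        state = nxt

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)])
          for s in {(p, soc) for p in (0, 1) for soc in range(4)}}
print("off-peak, empty battery ->", policy[(0, 0)])
print("peak, full battery      ->", policy[(1, 3)])
```

Even this tiny agent learns the arbitrage pattern the paper reports at household scale: charge when electricity is cheap, discharge back to the home or grid when it is expensive.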